#ChatGPT App Development Company
Explore tagged Tumblr posts
smtlabsio · 1 year ago
Text
Tumblr media
If you are looking to elevate user engagement with cutting-edge ChatGPT applications! SMT Labs, the best ChatGPT Application Development Company, brings expertise and creativity to transform your ideas into conversational masterpieces. We're the premier ChatGPT Application Development Company at SMT Labs, dedicated to crafting conversational masterpieces that captivate and engage your audience. We combine cutting-edge technology with uncontrolled creativity to bring your ideas to life, altering how users interact with your brand.
0 notes
cryptocurrencydevelop · 1 year ago
Text
Elevating Communication: The Art of Developing ChatGPT Applications
Tumblr media
ChatGPT - An Overview
ChatGPT, short for Chat Generative Pre-Trained Transformer, was built by a popular AI research company called Open AI. This technology, functioning as an AI chatbot, has the capability to comprehend natural human language and formulate appropriate responses. In simple terms, you can pose a question to ChatGPT and it will provide you with an answer. It doesn't just answer questions, you can also ask it to help plan a fun day in a tourist city, write a computer code, or solve mathematical equations. The tool will likely provide you with the information you're looking for.
Business Benefits Of Launching ChatGPT Like Application:
Innovative Collaboration: It fosters collaborative efforts within teams by assisting in brainstorming sessions, idea generation, and facilitating communication, enhancing overall productivity.
Data Insights: The platform can generate valuable insights by analyzing user interactions, helping businesses understand customer preferences, pain points, and trends for informed decision-making.
Time and Cost Savings: Automation of tasks through a ChatGPT-like platform reduces the time spent on routine queries, allowing businesses to allocate resources more efficiently and potentially lowering operational costs.
Efficient Customer Support: Businesses can use the platform to streamline customer support processes, addressing queries and issues effectively, resulting in improved service efficiency.
Enhanced Customer Engagement: A platform like ChatGPT improves customer interactions by providing intelligent and prompt responses, leading to increased satisfaction and engagement.
Extensive Money Making Options: Launching a ChatGPT-like platform helps in generating huge revenue through various monetization strategies.
These wide range of benefits has attracted numerous entrepreneurs looking to launch an AI tool similar to ChatGPT. If you are one among them, then hire an expert team of AI developers to launch your own chatbot like ChatGPT.
ChatGPT Application Development
For businesses and individuals seeking skilled ChatGPT developers, Developcoins is the ideal choice. As a prominent AI development company, we can assist you in creating an exceptional AI Chatbot system that surpasses your expectations. Committed to excellence, we bring innovation, reliability, and personalized solutions to our AI-based chatbot development services. To know more in detail about the perks of our ChatGPT Application Development, connect with our experts now.
0 notes
thetatechnolabsusa · 2 days ago
Text
How DeepSeek AI & ChatGPT Are Transforming the Future of Artificial Intelligence
Explore how DeepSeek AI and ChatGPT are transforming industries with powerful AI solutions. Learn how Theta Technolabs, a leading AI development company in Dallas, empowers businesses with custom AI software, web app development, and machine learning solutions.
0 notes
atcuality3 · 21 days ago
Text
Unlock New Possibilities with Atcuality's AI-Driven Tech
Every business faces the challenge of standing out in a crowded digital marketplace. At Atcuality, we help you break through the noise with smarter, AI-driven tools built for impact. One of our most powerful offerings is generative AI, which allows you to generate hyper-personalized experiences, streamline content production, and even automate design elements across multiple channels. With Atcuality’s expertise, you’ll move from static templates to dynamic, data-informed creativity that scales. From startups to enterprises, our solutions are customized to your goals, unlocking a future where innovation never stops and your brand becomes truly unforgettable.
Tumblr media
0 notes
atcuality1 · 5 months ago
Text
Advanced WordPress Security Services for Peace of Mind
A secure website is essential for any business, whether you’re running an e-commerce platform or a blog. At Atcuality, we provide advanced WordPress security services tailored to your specific needs. Our solutions include malware removal, vulnerability scanning, and backups to safeguard your data. We also offer 24/7 monitoring to ensure immediate response to any security breaches. With a focus on providing end-to-end protection, we help businesses maintain credibility and uptime. Partner with us and experience unmatched security for your WordPress site, ensuring that you stay one step ahead of cyber threats.
1 note · View note
techgropse0 · 1 year ago
Text
1 note · View note
oliviaemiley444 · 1 year ago
Text
Fintech Fun: Exploring the Cool Benefits of Fintech and Chatbots
Introduction:
Hey there little buddies! Today, we're going on a super cool adventure to explore something called "Fintech." It's like a special kind of magic that helps us with money, and guess what? Chatbots are our new superhero friends in this fintech world! Let's find out what fintech is, how chatbots help, and why it's all so awesome.
 What is Fintech? 
Fintech is like having a superhero friend for your money. It stands for "financial technology," and it's all about using smart computer tricks to make handling money way more fun and easy. Imagine your piggy bank getting a digital upgrade – that's fintech in action!
Tumblr media
The Cool Benefits of Fintech 
1. Digital Piggy Banks:
With fintech, your piggy bank goes digital! It's like having a secret treasure chest inside your tablet or phone. You can see your money growing and even set goals for special treats or toys.
2. Quick Money Moves:
Fintech, along with chatbots, helps you buy things super fast. It's like using a magic spell to get your favorite toys in a blink! No more waiting – just a tap on the screen, and it's yours.
3. Saving Adventures:
Saving money becomes an exciting quest with fintech apps and chatbots. Picture it as a cool game where you collect coins and unlock rewards. Saving for that awesome toy becomes a fun adventure!
4. Learning with Games:
Guess what? Fintech and chatbots aren't just for grown-ups. There are games that teach you about money and how to be a money master. It's like playing your favorite video game while becoming a money superhero!
 Fintech Explorers Club
You can be part of the Fintech Explorers Club! It's like a club where we learn about the future of money together. Imagine being in a secret group where you're the expert in cool fintech and chatbot stuff!
Conclusion 
So, little buddies, fintech is like having a magical friend for your money. Digital piggy banks, quick money moves, saving adventures – fintech and chatbots make handling money a big, awesome adventure! Join the Fintech Explorers Club, and let's explore the future of money and chatbots together!
0 notes
appsdevelopmentservice · 1 year ago
Text
Find out how much it costs to build a chatbot app like ChatGPT. From ChatGPT API license costing to ongoing maintenance costs. Catch it now!
0 notes
perfectiongeeks · 2 years ago
Text
How ChatGPT and Bard are Helping Websites and Apps
Google Search has been the dominant search engine on the Internet for a long time. ChatGPT was the first possible rival to Google Search last year. It is an innovative chatbot developed by OpenAI that aims to revolutionize the technology industry. OpenAI's ChatGPT quickly gained more than a million users in a week after its release. This caused alarm at Google, even though ChatGPT has room to grow. Google's business intelligence services introduced Google Bard in 2023 as a response. Businesses that offer process outsourcing are constantly looking for new ways to grow and improve their businesses. AI and machine-learning technology can now help companies streamline their processes and increase their revenue. Many people believe that these new inventions will soon replace us and threaten our jobs. Here, we'll learn about these AI tools and how they could help your business.
Visit us:
0 notes
smtlabsio · 1 year ago
Text
Tumblr media
SMT Labs is a leading app development company that develops chatbots using ChatGPT technology. Our team of professional developers can create custom chatbots for any business or industry. Our chatbots are designed to be engaging, informative, and helpful, and they can be used to improve customer service, provide product information, and generate leads. If you want to hire the Best ChatGpt App Development Company to automate your customer service, improve your marketing, or increase your sales, then SMT Labs can help you develop a chatbot that meets your needs.
0 notes
ibrinfotech · 2 years ago
Text
IBR Infotech is a leading chat app development company that provides chat app development solutions that meet your business needs. Contact us today.
0 notes
are-we-art-yet · 28 days ago
Note
Is AWAY using it's own program or is this just a voluntary list of guidelines for people using programs like DALL-E? How does AWAY address the environmental concerns of how the companies making those AI programs conduct themselves (energy consumption, exploiting impoverished areas for cheap electricity, destruction of the environment to rapidly build and get the components for data centers etc.)? Are members of AWAY encouraged to contact their gov representatives about IP theft by AI apps?
What is AWAY and how does it work?
AWAY does not "use its own program" in the software sense—rather, we're a diverse collective of ~1000 members that each have their own varying workflows and approaches to art. While some members do use AI as one tool among many, most of the people in the server are actually traditional artists who don't use AI at all, yet are still interested in ethical approaches to new technologies.
Our code of ethics is a set of voluntary guidelines that members agree to follow upon joining. These emphasize ethical AI approaches, (preferably open-source models that can run locally), respecting artists who oppose AI by not training styles on their art, and refusing to use AI to undercut other artists or work for corporations that similarly exploit creative labor.
Environmental Impact in Context
It's important to place environmental concerns about AI in the context of our broader extractive, industrialized society, where there are virtually no "clean" solutions:
The water usage figures for AI data centers (200-740 million liters annually) represent roughly 0.00013% of total U.S. water usage. This is a small fraction compared to industrial agriculture or manufacturing—for example, golf course irrigation alone in the U.S. consumes approximately 2.08 billion gallons of water per day, or about 7.87 trillion liters annually. This makes AI's water usage about 0.01% of just golf course irrigation.
Looking into individual usage, the average American consumes about 26.8 kg of beef annually, which takes around 1,608 megajoules (MJ) of energy to produce. Making 10 ChatGPT queries daily for an entire year (3,650 queries) consumes just 38.1 MJ—about 42 times less energy than eating beef. In fact, a single quarter-pound beef patty takes 651 times more energy to produce than a single AI query.
Overall, power usage specific to AI represents just 4% of total data center power consumption, which itself is a small fraction of global energy usage. Current annual energy usage for data centers is roughly 9-15 TWh globally—comparable to producing a relatively small number of vehicles.
The consumer environmentalism narrative around technology often ignores how imperial exploitation pushes environmental costs onto the Global South. The rare earth minerals needed for computing hardware, the cheap labor for manufacturing, and the toxic waste from electronics disposal disproportionately burden developing nations, while the benefits flow largely to wealthy countries.
While this pattern isn't unique to AI, it is fundamental to our global economic structure. The focus on individual consumer choices (like whether or not one should use AI, for art or otherwise,) distracts from the much larger systemic issues of imperialism, extractive capitalism, and global inequality that drive environmental degradation at a massive scale.
They are not going to stop building the data centers, and they weren't going to even if AI never got invented.
Creative Tools and Environmental Impact
In actuality, all creative practices have some sort of environmental impact in an industrialized society:
Digital art software (such as Photoshop, Blender, etc) generally uses 60-300 watts per hour depending on your computer's specifications. This is typically more energy than dozens, if not hundreds, of AI image generations (maybe even thousands if you are using a particularly low-quality one).
Traditional art supplies rely on similar if not worse scales of resource extraction, chemical processing, and global supply chains, all of which come with their own environmental impact.
Paint production requires roughly thirteen gallons of water to manufacture one gallon of paint.
Many oil paints contain toxic heavy metals and solvents, which have the potential to contaminate ground water.
Synthetic brushes are made from petroleum-based plastics that take centuries to decompose.
That being said, the point of this section isn't to deflect criticism of AI by criticizing other art forms. Rather, it's important to recognize that we live in a society where virtually all artistic avenues have environmental costs. Focusing exclusively on the newest technologies while ignoring the environmental costs of pre-existing tools and practices doesn't help to solve any of the issues with our current or future waste.
The largest environmental problems come not from individual creative choices, but rather from industrial-scale systems, such as:
Industrial manufacturing (responsible for roughly 22% of global emissions)
Industrial agriculture (responsible for roughly 24% of global emissions)
Transportation and logistics networks (responsible for roughly 14% of global emissions)
Making changes on an individual scale, while meaningful on a personal level, can't address systemic issues without broader policy changes and overall restructuring of global economic systems.
Intellectual Property Considerations
AWAY doesn't encourage members to contact government representatives about "IP theft" for multiple reasons:
We acknowledge that copyright law overwhelmingly serves corporate interests rather than individual creators
Creating new "learning rights" or "style rights" would further empower large corporations while harming individual artists and fan creators
Many AWAY members live outside the United States, many of which having been directly damaged by the US, and thus understand that intellectual property regimes are often tools of imperial control that benefit wealthy nations
Instead, we emphasize respect for artists who are protective of their work and style. Our guidelines explicitly prohibit imitating the style of artists who have voiced their distaste for AI, working on an opt-in model that encourages traditional artists to give and subsequently revoke permissions if they see fit. This approach is about respect, not legal enforcement. We are not a pro-copyright group.
In Conclusion
AWAY aims to cultivate thoughtful, ethical engagement with new technologies, while also holding respect for creative communities outside of itself. As a collective, we recognize that real environmental solutions require addressing concepts such as imperial exploitation, extractive capitalism, and corporate power—not just focusing on individual consumer choices, which do little to change the current state of the world we live in.
When discussing environmental impacts, it's important to keep perspective on a relative scale, and to avoid ignoring major issues in favor of smaller ones. We promote balanced discussions based in concrete fact, with the belief that they can lead to meaningful solutions, rather than misplaced outrage that ultimately serves to maintain the status quo.
If this resonates with you, please feel free to join our discord. :)
Works Cited:
USGS Water Use Data: https://www.usgs.gov/mission-areas/water-resources/science/water-use-united-states
Golf Course Superintendents Association of America water usage report: https://www.gcsaa.org/resources/research/golf-course-environmental-profile
Equinix data center water sustainability report: https://www.equinix.com/resources/infopapers/corporate-sustainability-report
Environmental Working Group's Meat Eater's Guide (beef energy calculations): https://www.ewg.org/meateatersguide/
Hugging Face AI energy consumption study: https://huggingface.co/blog/carbon-footprint
International Energy Agency report on data centers: https://www.iea.org/reports/data-centres-and-data-transmission-networks
Goldman Sachs "Generational Growth" report on AI power demand: https://www.goldmansachs.com/intelligence/pages/gs-research/generational-growth-ai-data-centers-and-the-coming-us-power-surge/report.pdf
Artists Network's guide to eco-friendly art practices: https://www.artistsnetwork.com/art-business/how-to-be-an-eco-friendly-artist/
The Earth Chronicles' analysis of art materials: https://earthchronicles.org/artists-ironically-paint-nature-with-harmful-materials/
Natural Earth Paint's environmental impact report: https://naturalearthpaint.com/pages/environmental-impact
Our World in Data's global emissions by sector: https://ourworldindata.org/emissions-by-sector
"The High Cost of High Tech" report on electronics manufacturing: https://goodelectronics.org/the-high-cost-of-high-tech/
"Unearthing the Dirty Secrets of the Clean Energy Transition" (on rare earth mineral mining): https://www.theguardian.com/environment/2023/apr/18/clean-energy-dirty-mining-indigenous-communities-climate-crisis
Electronic Frontier Foundation's position paper on AI and copyright: https://www.eff.org/wp/ai-and-copyright
Creative Commons research on enabling better sharing: https://creativecommons.org/2023/04/24/ai-and-creativity/
209 notes · View notes
tired-pidgeon · 4 months ago
Text
A China-based startup just released DeepSeek, a new AI model that the company said was produced in 2 months for under $6 million. In comparison, Meta alone said it plans to spend $65 Billion on AI this year. OpenAI is spending $100k-$700k a DAY to run their AI models.
DeepSeek is good enough to rival ChatGPT and Anthropic, and has an open-source model
(Source: CNN, watch from 2:38 onward)
Meanwhile, Trump just announced the Stargate Project, an AI investment initiative that includes OpenAI, Arm, Nvidia and Oracle. The project aims to invest $500 billion over the next four years to build data centers across the U.S. that will support AI models and allow them to continue developing
DeepSeek’s launch — it is now the most downloaded app on the App Store, ahead of ChatGPT — caused tech stocks to fall today, but according to tech consultant Shelly Palmer during the linked interview with CNN, American tech companies are likely to rise to this challenge.
The wide disparity in cost and training time between the DeepSeek and other AI models is staggering, and it begs some questions: how did DeepSeek do it faster and cheaper? Are they telling the truth? Why haven’t American firms figured this out? Why are American firms charging so much?
Mr Palmer attributes this to the different ways AI models functions. DeepSeek relies on algorithmic efficiency, while American AI models rely on brute force. Mr Palmer notes that since China has had restricted access to chips and tech (thanks to U.S. sanctions), it has had to find another way to solve the problem.
If I were to take an optimistic perspective, I’d hope that this new model will encourage American companies to step up their game and create even more efficient models. It’s the open market after all. I hope this will result in the reduction of AI’s environmental damage, which is currently proceeding on an unsustainable level. AI can be good or bad, but its current devouring of limited resources is unbearable. I’m glad DeepSeek was able to find a better way to create a more efficient model. Not only that, but since its model is open source, anyone can look at it and learn from it. It could actually prove to be an important springboard for AI technology
If I were to take a pessimistic perspective, the U.S. might take this as a threat instead of an invitation to innovate and win in the free market. TheUS might impose even more isolationist policies, possibly banning tech apps from China and ironically creating its own Great Firewall. In doing so, its people are stuck having to rely on domestic AI models, while China’s influence in the tech sphere grows through the rest of the world. Meanwhile, the US continues to spread Sinophobia and consequently misses out on new tech because it is throwing a tantrum at not having figured out the AI puzzle first, possibly accusing DeepSeek of IP theft
30 notes · View notes
justforbooks · 4 months ago
Text
Tumblr media
The DeepSeek panic reveals an AI world ready to blow❗💥
The R1 chatbot has sent the tech world spinning – but this tells us less about China than it does about western neuroses
The arrival of DeepSeek R1, an AI language model built by the Chinese AI lab DeepSeek, has been nothing less than seismic. The system only launched last week, but already the app has shot to the top of download charts, sparked a $1tn (£800bn) sell-off of tech stocks, and elicited apocalyptic commentary in Silicon Valley. The simplest take on R1 is correct: it’s an AI system equal in capability to state-of-the-art US models that was built on a shoestring budget, thus demonstrating Chinese technological prowess. But the big lesson is perhaps not what DeepSeek R1 reveals about China, but about western neuroses surrounding AI.
For AI obsessives, the arrival of R1 was not a total shock. DeepSeek was founded in 2023 as a subsidiary of the Chinese hedge fund High-Flyer, which focuses on data-heavy financial analysis – a field that demands similar skills to top-end AI research. Its subsidiary lab quickly started producing innovative papers, and CEO Liang Wenfeng told interviewers last November that the work was motivated not by profit but “passion and curiosity”.
This approach has paid off, and last December the company launched DeepSeek-V3, a predecessor of R1 with the same appealing qualities of high performance and low cost. Like ChatGPT, V3 and R1 are large language models (LLMs): chatbots that can be put to a huge variety of uses, from copywriting to coding. Leading AI researcher Andrej Karpathy spotted the company’s potential last year, commenting on the launch of V3: “DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget.” (That quoted budget was $6m – hardly pocket change, but orders of magnitude less than the $100m-plus needed to train OpenAI’s GPT-4 in 2023.)
R1’s impact has been far greater for a few different reasons.
First, it’s what’s known as a “chain of thought” model, which means that when you give it a query, it talks itself through the answer: a simple trick that hugely improves response quality. This has not only made R1 directly comparable to OpenAI’s o1 model (another chain of thought system whose performance R1 rivals) but boosted its ability to answer maths and coding queries – problems that AI experts value highly. Also, R1 is much more accessible. Not only is it free to use via the app (as opposed to the $20 a month you have to pay OpenAI to talk to o1) but it’s totally free for developers to download and implement into their businesses. All of this has meant that R1’s performance has been easier to appreciate, just as ChatGPT’s chat interface made existing AI smarts accessible for the first time in 2022.
Second, the method of R1’s creation undermines Silicon Valley’s current approach to AI. The dominant paradigm in the US is to scale up existing models by simply adding more data and more computing power to achieve greater performance. It’s this approach that has led to huge increases in energy demands for the sector and tied tech companies to politicians. The bill for developing AI is so huge that techies now want to leverage state financing and infrastructure, while politicians want to buy their loyalty and be seen supporting growing companies. (See, for example, Trump’s $500bn “Stargate” announcement earlier this month.) R1 overturns the accepted wisdom that scaling is the way forward. The system is thought to be 95% cheaper than OpenAI’s o1 and uses one tenth of the computing power of another comparable LLM, Meta’s Llama 3.1 model. To achieve equivalent performance at a fraction of the budget is what’s truly shocking about R1, and it’s this that has made its launch so impactful. It suggests that US companies are throwing money away and can be beaten by more nimble competitors.
But after these baseline observations, it gets tricky to say exactly what R1 “means” for AI. Some are arguing that R1’s launch shows we’re overvaluing companies like Nvidia, which makes the chips integral to the scaling paradigm. But it’s also possible the opposite is true: that R1 shows AI services will fall in price and demand will, therefore, increase (an economic effect known as Jevons paradox, which Microsoft CEO Satya Nadella helpfully shared a link to on Monday). Similarly, you might argue that R1’s launch shows the failure of US policy to limit Chinese tech development via export controls on chips. But, as AI policy researcher Lennart Heim has argued, export controls take time to work and affect not just AI training but deployment across the economy. So, even if export controls don’t stop the launches of flagships systems like R1, they might still help the US retain its technological lead (if that’s the outcome you want).
All of this is to say that the exact effects of R1’s launch are impossible to predict. There are too many complicating factors and too many unknowns to say what the future holds. However, that hasn’t stopped the tech world and markets reacting in a frenzy, with CEOs panicking, stock prices cratering, and analysts scrambling to revise predictions for the sector. And what this really shows is that the world of AI is febrile, unpredictable and overly reactive. This a dangerous combination, and if R1 doesn’t cause a destructive meltdown of this system, it’s likely that some future launch will.
Daily inspiration. Discover more photos at Just for Books…?
27 notes · View notes
myconetted · 11 months ago
Text
to ppl who are currently skeptical about llm (aka "ai") capabilities and benefits for everyday people: i would strongly encourage u to check out this really cool slide deck someone made where they describe a world in which non-software-engineers can make home-grown apps by and for themselves with assistance from llms.
in particular there are a couple of linked tools that are really amazing, including one that gives you a whiteboard interface to draw and describe the app interface you want and then uses gpt4o to write the code for that app.
i think it's also an excellent counter to the argument that llms are basically only going to benefit le capitalisme and corporate overlords, because the technology presented here is actually being used to help users take matters into their own hands. to build their own apps that do what they want the apps to do, and to do it all on their own computers, so none of their private data has to get slurped up by some new startup.
"oh, so i should just let openai slurp up all my data instead? sounds great, asshole" no!!!! that's not what this is suggesting! this is saying you can make your own apps with the llms. then you put your private data in the app that you made, and that app doesn't need chatgpt to work, so literally everything involving your personal data remains on your personal devices.
is this a complete argument for justifying the existence of ai and llms? no! is this a justification for other privacy abuses? also no! does this mean we should all feel totally okay and happy with companies laying off tons of people in order to replace them with llms? 100% no!!!! please continue being mad about that.
just don't let those problems push you towards believing these things don't have genuinely impressive capabilities that can actually help you unlock the ability to do cool things you wouldn't otherwise have the time, energy, or inclination to do.
41 notes · View notes
mariacallous · 9 months ago
Text
Microsoft raced to put generative AI at the heart of its systems. Ask a question about an upcoming meeting and the company’s Copilot AI system can pull answers from your emails, Teams chats, and files—a potential productivity boon. But these exact processes can also be abused by hackers.
Today at the Black Hat security conference in Las Vegas, researcher Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers, including using it to provide false references to files, exfiltrate some private data, and dodge Microsoft’s security protections.
One of the most alarming displays, arguably, is Bargury’s ability to turn the AI into an automatic spear-phishing machine. Dubbed LOLCopilot, the red-teaming code Bargury created can—crucially, once a hacker has access to someone’s work email—use Copilot to see who you email regularly, draft a message mimicking your writing style (including emoji use), and send a personalized blast that can include a malicious link or attached malware.
“I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf,” says Bargury, the cofounder and CTO of security company Zenity, who published his findings alongside videos showing how Copilot could be abused. “A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.”
That demonstration, as with other attacks created by Bargury, broadly works by using the large language model (LLM) as designed: typing written questions to access data the AI can retrieve. However, it can produce malicious results by including additional data or instructions to perform certain actions. The research highlights some of the challenges of connecting AI systems to corporate data and what can happen when “untrusted” outside data is thrown into the mix—particularly when the AI answers with what could look like legitimate results.
Among the other attacks created by Bargury is a demonstration of how a hacker—who, again, must already have hijacked an email account—can gain access to sensitive information, such as people’s salaries, without triggering Microsoft’s protections for sensitive files. When asking for the data, Bargury’s prompt demands the system does not provide references to the files data is taken from. “A bit of bullying does help,” Bargury says.
In other instances, he shows how an attacker—who doesn’t have access to email accounts but poisons the AI’s database by sending it a malicious email—can manipulate answers about banking information to provide their own bank details. “Every time you give AI access to data, that is a way for an attacker to get in,” Bargury says.
Another demo shows how an external hacker could get some limited information about whether an upcoming company earnings call will be good or bad, while the final instance, Bargury says, turns Copilot into a “malicious insider” by providing users with links to phishing websites.
Phillip Misner, head of AI incident detection and response at Microsoft, says the company appreciates Bargury identifying the vulnerability and says it has been working with him to assess the findings. “The risks of post-compromise abuse of AI are similar to other post-compromise techniques,” Misner says. “Security prevention and monitoring across environments and identities help mitigate or stop such behaviors.”
As generative AI systems, such as OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini, have developed in the past two years, they’ve moved onto a trajectory where they may eventually be completing tasks for people, like booking meetings or online shopping. However, security researchers have consistently highlighted that allowing external data into AI systems, such as through emails or accessing content from websites, creates security risks through indirect prompt injection and poisoning attacks.
“I think it’s not that well understood how much more effective an attacker can actually become now,” says Johann Rehberger, a security researcher and red team director, who has extensively demonstrated security weaknesses in AI systems. “What we have to be worried [about] now is actually what is the LLM producing and sending out to the user.”
Bargury says Microsoft has put a lot of effort into protecting its Copilot system from prompt injection attacks, but he says he found ways to exploit it by unraveling how the system is built. This included extracting the internal system prompt, he says, and working out how it can access enterprise resources and the techniques it uses to do so. “You talk to Copilot and it’s a limited conversation, because Microsoft has put a lot of controls,” he says. “But once you use a few magic words, it opens up and you can do whatever you want.”
Rehberger broadly warns that some data issues are linked to the long-standing problem of companies allowing too many employees access to files and not properly setting access permissions across their organizations. “Now imagine you put Copilot on top of that problem,” Rehberger says. He says he has used AI systems to search for common passwords, such as Password123, and it has returned results from within companies.
Both Rehberger and Bargury say there needs to be more focus on monitoring what an AI produces and sends out to a user. “The risk is about how AI interacts with your environment, how it interacts with your data, how it performs operations on your behalf,” Bargury says. “You need to figure out what the AI agent does on a user's behalf. And does that make sense with what the user actually asked for.”
25 notes · View notes